
Creating a Model Nutrition Label: Model Cards for Python Models

Started 08-16-2024

Model cards were introduced with the SAS Viya 2024.07 release and act as a nutrition label for AI models. The model cards in SAS Viya feature easy-to-understand visuals and actionable takeaways. When designing the model card, SAS made sure it contained useful and digestible information for a variety of stakeholders and would become a natural extension of the model’s lifecycle. As you develop and manage models within SAS Viya, the model card starts to populate. But what about Python models developed in environments outside of SAS Viya? Don’t worry, we didn’t forget about our Python modelers! In Part 1 of this series, we explored creating a model card for models in SAS Model Studio, and in this article, we will focus on Python models developed outside of SAS Viya. To build a complete model card, we will use the python-sasctl package and tools on SAS Viya. This example notebook includes the code you need to generate the files for the model card.

  

[Image: Model Card example]

 

 

Getting Everything in SAS Model Manager

 

SAS Viya cannot automatically track actions taken outside of the platform, but data scientists training their models in their Python environments outside of SAS Viya can use several functions in python-sasctl to generate the files required for SAS Model Manager and the model card. They can also import their model into SAS Model Manager directly from their Python notebook or code. Once they are happy with their trained model, these are the functions from python-sasctl our Python-wielding data scientists will need to run:

  1. pzmm.PickleModel.pickle_trained_model() to create a binary representation (pickle) of their model.
  2. pzmm.JSONFiles.write_var_json() using their input data to write out metadata about the variables the model expects as inputs.
  3. pzmm.JSONFiles.write_var_json() using their output data to write out metadata about the variables the model generates as outputs.
  4. pzmm.JSONFiles.write_model_properties_json() to save key information about the model, including target, algorithm, description, and modeler.
  5. pzmm.JSONFiles.write_file_metadata_json() for file assignments.
  6. pzmm.JSONFiles.calculate_model_statistics() to save model performance metrics at the time of training.
  7. pzmm.JSONFiles.assess_model_bias() to examine the model’s performance and prediction across levels of a potentially sensitive variable. This step won’t apply if there are no variables available for bias assessment, but you can learn more about this function in this notebook.
  8. pzmm.JSONFiles.generate_model_card() to generate the remaining files needed by the model card.
  9. pzmm.JSONFiles.create_requirements_json() to specify the dependencies required for the Python model. This step is optional but recommended for easy dependency management. You can learn more about this function in this notebook.
  10. pzmm.ImportModel.import_model() to write the model score code and register the files you created into SAS Model Manager. Be sure to specify a project using the project parameter! Optionally, you can write your own score code, zip all the files up using pzmm.ZipModel.zip_files(), and import them using model_repository.import_model_from_zip().
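The steps above can be sketched end to end. This is a minimal outline under stated assumptions, not a definitive implementation: the host, credentials, project name, target variable, and user ID are placeholders, and argument names can differ slightly between python-sasctl releases, so confirm them against the sasctl documentation. The sasctl imports live inside the function so the sketch can be defined without a live SAS Viya connection.

```python
from pathlib import Path

def register_model(model, X_train, predictions,
                   model_prefix="my_python_model",
                   path=Path("model_files")):
    # Imports are inside the function so this sketch can be defined
    # without a live SAS Viya connection.
    from sasctl import Session
    import sasctl.pzmm as pzmm

    path.mkdir(exist_ok=True)

    # 1. Pickle the trained model
    pzmm.PickleModel.pickle_trained_model(
        model_prefix=model_prefix, trained_model=model, pickle_path=path
    )

    # 2 & 3. Input and output variable metadata (is_input flags the direction)
    pzmm.JSONFiles.write_var_json(X_train, is_input=True, json_path=path)
    pzmm.JSONFiles.write_var_json(predictions, is_input=False, json_path=path)

    # 4. Key model properties, including the target and modeler
    pzmm.JSONFiles.write_model_properties_json(
        model_name=model_prefix,
        target_variable="BAD",        # hypothetical target
        json_path=path,
        modeler="yourUserID",         # placeholder user ID
    )

    # 5. File assignments
    pzmm.JSONFiles.write_file_metadata_json(
        model_prefix=model_prefix, json_path=path
    )

    # Steps 6-9 (statistics, bias assessment, model card files,
    # requirements) follow the same pattern and are omitted for brevity.

    # 10. Write score code and register everything into SAS Model Manager
    with Session("your-viya-host.com", "username", "password"):
        pzmm.ImportModel.import_model(
            model_files=path,
            model_prefix=model_prefix,
            project="My Project",     # created automatically if it doesn't exist
            input_data=X_train,
        )
```

In practice you would call register_model(my_sklearn_model, X_train, predictions_df) after training; keeping all of the generated JSON and pickle files in one directory makes the final import step a single call.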

Now, you should see your Python model in SAS Model Manager and a nearly completed model card! We'll walk through each section below and highlight any remaining tasks to complete the card. 

 

The Model Card

 

The first step for making your model card is to register your model into a project. The pzmm.ImportModel.import_model() function imports models into SAS Model Manager directly from a Python notebook or code file. This function has a project parameter where a user can specify the project to which the model should be registered. If the project doesn’t exist, python-sasctl will automatically create a project with the given name and register the model to it. Once your models are registered, you can open them in SAS Model Manager.

 

The Model Card appears as the first tab of a model instance inside SAS Model Manager when the model function is prediction or classification. You can set the model function using the target_type parameter of the pzmm.JSONFiles.generate_model_card() function, but you can also update the property in SAS Model Manager directly.
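For reference, a hedged sketch of that call is below. The algorithm name, model prefix, and argument names follow the python-sasctl documentation but may vary by release, so treat them as assumptions and verify against your version.

```python
def write_model_card_files(train_data, train_predictions, path):
    import sasctl.pzmm as pzmm

    pzmm.JSONFiles.generate_model_card(
        model_prefix="my_python_model",       # placeholder prefix
        model_files=path,
        algorithm="Gradient boosting",        # illustrative algorithm name
        train_data=train_data,
        train_predictions=train_predictions,
        target_type="classification",         # sets the model function on the card
    )
```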

 

Left-Hand Pane

 

The left-hand pane contains three sets of information: tags, the modeler, and the responsible party. The modeler is specified using the modeler parameter of the pzmm.JSONFiles.write_model_properties_json() function, but you can update the modeler and add custom tags in the Properties tab of the model. The responsible party is the user or group responsible for the model, and this field is populated at the project level. A link to the project is available near the top of the screen. Within the project, navigate to the Properties tab and then the Model Usage section. You can update the responsible party here, as well as any model usage fields.

 

[Image: Updating the Model Card left-hand pane]

 

Overview

 

The Overview section provides an at-a-glance review of the model. It provides easy-to-understand visuals of model health that are supported by other sections of the card. This tab reports on model performance during training and over time. It also reports on influential variables, variable privacy classifications, and the completeness of the model card. Most of the data in the Overview section is populated using the files created earlier, but a few areas will require attention when the model is first registered.

 

If you notice a blue warning about the thresholds for your training metrics, you can review and update the thresholds for action in the project properties under model evaluation. For the performance monitoring metrics to become available, complete the steps listed in the Model Audit section of this article. To ensure the variable privacy classifications come through, scroll down to the Data Summary and complete the steps I’ve listed in the corresponding section below. If you want the “No” to change to a “Yes” in the Limitations Documented block, complete the Model Usage section of the card, as outlined in the next section of this article. Finally, to see fairness metrics, you need to run pzmm.JSONFiles.assess_model_bias().
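If you still need to run the bias assessment, the call looks roughly like the sketch below. The column names and the sensitive variable are hypothetical, and the argument names are assumptions based on the sasctl bias-assessment notebook, so double-check them for your release; the function also requires an active connection to SAS Viya.

```python
def run_bias_assessment(score_table, path):
    import sasctl.pzmm as pzmm

    # score_table: a DataFrame holding the actuals, the predicted
    # probabilities, and the potentially sensitive variable.
    pzmm.JSONFiles.assess_model_bias(
        score_table=score_table,
        sensitive_values=["Gender"],          # hypothetical sensitive variable
        actual_values="BAD",                  # hypothetical target column
        prob_values=["P_BAD1", "P_BAD0"],     # hypothetical probability columns
        json_path=path,
    )
```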

 

The Overview section can provide evidence that your model is in good health or direct you toward areas that need attention.

 

Model Usage

 

The Model Usage section describes the intended usage, expected benefits, out-of-scope use cases, and limitations of the model. This section must be filled out manually at either the project or model level in the Properties tab. The values of the model usage are inherited from the project-level properties. However, model-specific information can be specified for each property value at the model level.

 

[Image: Updating the Model Card Model Usage section]

 

Data Summary

 

The Data Summary section provides information about the training data from SAS Information Catalog. When registering your model from Python, the training data property is set automatically if you specify the train_data parameter of the pzmm.JSONFiles.generate_model_card() function, but you may be prompted to run an analysis in the Data Summary section. If you see a button in this section, click it to run your analysis.

 

Once the analysis is complete, you should see a summary courtesy of SAS Information Catalog. This summary contains the number of columns, number of rows, size, status, completeness of data, information privacy classifications, data tags, and data descriptions. If there are gaps in the description, tags, or status, these can be corrected in SAS Information Catalog. Working with data in SAS Information Catalog may require advanced permissions, so work with your data engineers or data owners to ensure your data’s metadata is complete. Having complete data about your data is a best practice for building trustworthy models!

 

[Image: Updating the Model Card Data Summary section]

Model Summary

 

The Model Summary section examines the model’s performance at the time of training, which corresponds to the training donut charts in the Overview tab. The information in this tab is populated by running the pzmm.JSONFiles.calculate_model_statistics() function. The Model Summary section includes information about the model target, algorithm, development tool and version, various accuracy measures across training, testing, and validation splits, generalizability, and variable importance. If you’ve also run pzmm.JSONFiles.assess_model_bias(), fairness metrics will appear in this tab as well. Overall, this tab provides a wealth of information about how well the model performed during training, which can serve as a baseline for monitoring model performance over time.

 

[Image: Complete Model Card Summary section]

 

Model Audit

 

While the Model Summary section focuses on the model’s performance at the time of training, the Model Audit section reports on performance over time.  The Model Audit section also provides a deeper dive into the performance monitoring donut charts in the Overview section.

 

The Model Audit section relies on two capabilities of SAS Model Manager: performance monitoring and Key Performance Indicator (KPI) rules.

You will set your KPI thresholds in the project properties under model evaluation. You can create a rule in just a few clicks, as outlined in the documentation or the quick demo below:

 

Creating KPI alert rules

 

Performance monitoring reviews model performance over time in batch at user-defined time points. To create a performance monitoring report, you need data that includes the actual or ground-truth values; SAS Model Manager then graphs model performance over time. Without ground truth, you will only get metrics on data drift. You can create a performance monitoring report in a few moments from the project, as outlined in these steps or the demo video below:
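If you prefer to script this step, python-sasctl exposes it through the model_management service. This is only a sketch under stated assumptions: the argument names vary across sasctl releases, so confirm them against the service documentation, and the host, credentials, and table prefix are placeholders.

```python
def monitor_performance(project_name):
    from sasctl import Session
    from sasctl.services import model_management as mm

    with Session("your-viya-host.com", "username", "password"):
        # Performance data tables are expected in CAS with names that
        # match the table prefix (for example, hmeq_perf_1_q1).
        perf_def = mm.create_performance_definition(
            table_prefix="hmeq_perf",     # placeholder prefix
            project=project_name,
        )
        mm.execute_performance_definition(perf_def)
```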

 

Creating and running a performance monitoring definition 

 

Now, you can view the latest model accuracy, fairness, and model drift metrics against your thresholds on the latest run of performance monitoring.

 

[Image: Creating a complete Model Card]

 

Pulling it All Together

 

Even if your Python model was trained outside of SAS Viya, you can still build a complete model card! To learn more, visit the example notebook and see an example of each function called out in the article. Now you should have everything you need to create a complete nutrition label for your Python model. Post your questions, feedback, and ideas in the comments below!

 
